Key Takeaways:
- Border agencies need to comply with OMB Executive Memorandum M-24-10. With the right approach, they can establish even stronger AI governance and risk management practices.
- SAIC developed a holistic AI governance framework for our organization that is applicable to border agencies and can help them maintain strong public trust while complying with congressional mandates.
- The AI governance dialogue will continue, and border agencies should expect to adapt their approaches to the continually evolving AI landscape.
OMB Executive Memorandum M-24-10 outlines new AI governance, innovation and risk management standards.
AI is driving a border security revolution. Agencies that manage borders are using it for automated threat detection, identity and document verification, and more. AI will play a key role in the border of the future given its value as a force multiplier to assist multiple Department of Homeland Security (DHS) mission objectives. But first, border agencies must get AI governance right. In fact, given a December deadline to comply with OMB Executive Memorandum M-24-10, AI governance should be a top priority. As such, here are five issues for border agency leaders to think about as they tackle AI governance.
A critical and constant balancing act
Fairness and transparency in AI is critical for border agencies. National security, public safety and personal freedoms are at stake. This makes unbiased models, representative data and explainable decision-making essential. Border agency CIOs and CAIOs recognize this AI governance imperative. They must maintain strong public trust while complying with congressional mandates. At the same time, these leaders are in the tough position of having to develop AI governance while acquiring, building and scaling AI models.
AI governance for border agencies is a difficult balancing act. When it comes to how AI is developed, acquired and used, decisionmakers have to balance accuracy and bias, immediacy and privacy, and efficiency and safety while navigating tremendous complexity and nuance. Plus, they must do this well in the most challenging conditions possible—a high-risk, low-latency environment with no room for error and constant demand for quick action.
The OMB Executive Memo is welcome and comes at a good time for border agencies. It provides guidance to federal agencies about how to effectively plan for and deploy trustworthy AI systems, as directed by Executive Order 14110. But knowing how best to apply this guidance within border agencies will be challenging.
Achieve compliance—and more
The direction given in the Executive Memorandum creates an opportunity for CIOs and CAIOs in border agencies to establish a stronger AI governance and risk management foundation by integrating best practices from other established standards.
We used this value-added approach at SAIC to model a holistic AI governance framework for our organization. It’s applicable to border agencies as well—a way to exceed the mandate by balancing innovation with the public interest. While there is no one-size-fits-all AI governance and risk management approach, the fundamentals of this model are grounded in five initiatives.
1. Establish a cross-functional AI governance board
AI risk management requires more collaboration than traditional cyber risk management. Instead of solely protecting systems and data from unauthorized access, AI risk management must also consider how algorithms process and interpret human-generated data to make decisions and predictions or automate tasks. These capabilities create additional risks, such as data bias, fairness issues and privacy concerns that arise when AI misinterprets or misuses this human data.
Creating an AI governance board is a critical first step in managing such complex risk dynamics. For one, enterprise AI delivers benefits across an agency. The more centralized the data is, the more potential benefits there are. And ubiquitous AI adoption requires that all stakeholders accept AI governance. With an AI governance board, you can ensure that governance is informed by perspectives from across the agency.
What the best AI governance boards do:
Codify and communicate formal intake processes.
Create a plain-language AI glossary.
Educate the workforce on compliance guardrails.
Create a flexible governance model.
The AI governance board’s primary focus is to inform, evaluate and drive policies, practices, processes and communications channels related to AI risk. Boards help make the most of the AI opportunity within the context of the mission, building trust and transparency with the workforce, stakeholders and the public. Over time, they foster a common culture of informed, proactive AI risk management.
Border agencies should go beyond creating AI governance boards within a single agency. There are many agencies involved in different aspects of border management, from immigration and customs to counterterrorism and threat detection. Cross-agency participation is more challenging to convene and manage, but it is essential. Collaboration at this level is key to fully understanding how AI is being used in border management and how best to apply consistent and flexible governance principles across agencies with complementary missions.
2. Maintain an inventory of AI use cases and solutions
The executive order requires agencies to inventory AI use cases at least annually, submit them to OMB and post them on the agency website for full transparency. To do this well, border agencies should focus first on a data inventory. Data scientists within agencies need a comprehensive picture of all agency data through a centralized data catalog that is developed in adherence to requirements around how data is collected, stored and shared. The catalog helps agencies know what AI models they can realistically develop. This speaks to the symbiotic relationship between data and AI. Data needs AI to unlock its full value. And AI needs data to unlock its full value—along with guardrails to ensure it is used responsibly.
After the data inventory is completed, a border agency can turn to its AI inventory. The most comprehensive ones account for AI development projects, procured AI systems, AI-related R&D, data and information types, and AI actors. They also include project and development status, developer information, use case summary and context details, data sources, security classification of information, AI techniques used, responsible point of contact, and clarity on how public data is being used in AI models. The AI inventory requirements are also likely to evolve over time, as we are already seeing efforts by lawmakers and the White House to revise reporting requirements with the goal of promoting greater transparency.
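To make the fields above concrete, an inventory item could be captured in a simple record structure like the sketch below. This is an illustrative example, not an official OMB schema; the class and field names are assumptions.

```python
from dataclasses import dataclass, field, asdict

# Hypothetical record for one AI inventory item. Field names mirror the
# elements discussed above; they are illustrative, not an official schema.
@dataclass
class AIInventoryItem:
    name: str
    status: str                      # e.g., "development", "procured", "R&D"
    use_case_summary: str
    data_sources: list = field(default_factory=list)
    security_classification: str = "unclassified"
    ai_techniques: list = field(default_factory=list)
    responsible_poc: str = ""
    uses_public_data: bool = False

item = AIInventoryItem(
    name="Document verification model",
    status="development",
    use_case_summary="Flag potentially fraudulent travel documents for review",
    data_sources=["document-image-catalog"],
    ai_techniques=["computer vision"],
    responsible_poc="program-office@example.gov",
)

# asdict() yields a plain dictionary, ready to export for annual reporting.
print(asdict(item)["status"])  # development
```

A structure like this makes it straightforward to export the inventory for the annual OMB submission and the public posting on the agency website.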
Conducting these inventories is typically time-consuming because there are many elements to log for each inventory item. While border agencies can conduct effective inventories using spreadsheets, automation streamlines this process. AI development platforms like SAIC’s no-code and low-code Tenjin are effective tools for managing AI assets. The added transparency meets a compliance mandate and provides the benefit of allowing CAIOs to assess the impacts of AI on mission delivery. Users do not have to be skilled data scientists, analysts or engineers to use Tenjin. And Tenjin allows cross-functional teams including coders, data scientists and subject matter experts to collaborate in the same workspace.
Tenjin’s governance capabilities further enable enterprise-grade governance and AI portfolio oversight with standardized project plans, risk and value assessments, a centralized AI inventory and model registry, as well as workflow management where users can document reviews and sign off on assessments. It also keeps a persistent record of all user actions so that new users can audit the system as team composition changes, and it offers transparency into logical flows, delivering much-needed explainability. This makes it a vital toolkit for border agencies working to comply with requirements outlined in the OMB Executive Memo.
3. Map, measure and mitigate AI risk
Safety-impacting AI has the potential to significantly impact the safety of human well-being, environmental safety, critical infrastructure, etc.
Rights-impacting AI serves as a principal basis for a decision or action concerning a specific individual or entity’s civil rights, right to privacy, access to critical resources, equal access to opportunities, etc.
Source: OMB Executive Memo M-24-10
With a completed inventory, the next step is to map the different levels of risk associated with each inventory item. While border agencies will have complex and varying levels of risk to catalog, this process offers key insight to help address the highest-risk AI use cases first.
The Executive Memorandum requires context-based and sociotechnical categorization of AI as part of this mapping. As defined by OMB, there is safety-impacting AI and rights-impacting AI. Safety-impacting AI has physical and material impact, while rights-impacting AI affects personal liberties and justice. Border agencies must address both types of impacts. Sometimes, safety and rights impacts become known only after they have occurred. But there is a lot that border agencies can do to assess the potential effects of AI ahead of its deployment. A good AI governance structure has provisions for monitoring AI algorithms in development and operation.
By taking mapping further, you can get an even more comprehensive understanding of risk. Consider using Federal Information Processing Standards Publication 199 (FIPS-199), which provides a standardized risk categorization method for information and information systems. In addition, NIST’s AI Risk Management Framework (AI RMF) and accompanying standards provide best practices for assessing and mitigating identified AI risk. Looking ahead, several pieces of pending legislation in Congress point to the NIST framework as lawmakers seek to enact risk management requirements. Again, AI-powered tools like Tenjin can help you turn this risk mapping into an interactive management tool.
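FIPS-199 rates each system LOW, MODERATE or HIGH for confidentiality, integrity and availability, and the overall security category is the highest of the three ratings (the "high-water mark"). That rule can be sketched in a few lines:

```python
# FIPS-199 security categorization: the overall impact level is the
# highest ("high-water mark") of the confidentiality, integrity, and
# availability ratings.
LEVELS = {"LOW": 0, "MODERATE": 1, "HIGH": 2}

def fips199_category(confidentiality: str, integrity: str, availability: str) -> str:
    """Return the overall FIPS-199 impact level for a system."""
    return max(confidentiality, integrity, availability, key=LEVELS.__getitem__)

print(fips199_category("LOW", "MODERATE", "HIGH"))      # HIGH
print(fips199_category("LOW", "MODERATE", "MODERATE"))  # MODERATE
```

Applying this categorization to each AI inventory item gives agencies a consistent, standards-based starting point for prioritizing the highest-risk use cases.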
Border agencies can extend risk mapping even further with AI scorecards. These scorecards provide a granular look at how AI models perform against specific parameters. For example, a scorecard for an AI-powered facial recognition tool could track attributes like accuracy metrics, performance benchmarks, error handling and user feedback. AI scorecards not only help border agencies understand and manage risk, they support transparency across agencies and with the public. They also help agencies specify requirements to industry partners that tie directly to mission use cases.
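A scorecard like the facial recognition example above might be represented as a set of tracked metrics checked against minimum acceptable values. The metric names and thresholds below are hypothetical, purely to illustrate the idea:

```python
# Hypothetical scorecard for a face recognition model: each tracked
# metric is compared against an agreed minimum threshold.
scorecard = {
    "true_accept_rate": 0.97,
    "median_latency_s": 1.8,
    "user_satisfaction": 0.91,
}

# Minimum acceptable values (illustrative, tied to mission requirements).
thresholds = {
    "true_accept_rate": 0.95,
    "user_satisfaction": 0.85,
}

def failing_metrics(card: dict, mins: dict) -> list:
    """Return the metrics that fall below their minimum threshold."""
    return [m for m, floor in mins.items() if card.get(m, 0) < floor]

print(failing_metrics(scorecard, thresholds))  # []
```

A non-empty result flags the parameters that need remediation before (or during) deployment, giving governance boards a concrete, auditable signal rather than a subjective judgment.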
SAIC’s Identity and Data Sciences Lab (IDSL), which staffs and operates the Maryland Test Facility, is at the forefront of testing and evaluating AI-powered systems to ensure they work well, are fair, and are easy to use. The IDSL has developed AI scorecards to report on the performance of face collection and face recognition systems in support of the DHS Secretary’s Management Directive 026-11. For over a decade, these teams have been trusted by U.S. government customers to perform high-quality test and evaluation activities in the field of biometric, identity, and other computer vision-based AI systems.
4. Grow AI literacy by educating the workforce
Although the Executive Memo does not expressly stipulate any requirements for educating the workforce in AI, it does require agency AI strategies to include an assessment of the agency’s AI workforce capacity and projected needs, as well as a plan to recruit, hire, train, retain and empower AI practitioners and achieve AI literacy for non-practitioners. Workforce education is a critical success factor that should be a part of any AI governance approach.
For one, most of the subject matter experts who will participate in the AI governance board, such as ethics officers, legal counsel, diversity and inclusion representatives and human resources officers, may not be technical personnel by training. However, to contribute the most value to the board and the governance development process, they should have a foundational understanding of AI technologies.
This understanding is equally important for the entire workforce. AI is a consequential technology. It will ultimately impact how work is done and how citizens are served across the federal sector. The more that employees know, the more they will appreciate the importance of complying with AI governance guardrails. Hands-on training and upskilling with a low-code, no-code AI platform like Tenjin that is accessible across skill levels is an excellent way to grow AI literacy across your workforce.
As border agencies focus on AI literacy, it is important not to lose sight of the importance of uniquely human skills in this space. Even as agencies explore more AI use cases and the technology continues to advance, human-machine teaming lets AI do what it does best, finding patterns and analyzing vast quantities of data, while freeing humans to do what we do best: making nuanced judgments in complex situations.
5. Continuously monitor the changing AI landscape
Proactive AI governance is never achieved in a single effort. By design, it should be iterative and evolving. This is possible through a combination of awareness and action.
Awareness is everyone’s responsibility. The AI governance board and senior leadership should track the regulatory, policy, research, legal and technological landscapes to determine if (and how) changes impact AI governance. Employees can track governance gaps from a more pragmatic, day-to-day perspective. For example, agency leaders and privacy offices can monitor citizen feedback about AI-powered services at the border to improve governance and transparency. By creating real-world testing environments that include everyday citizens, like the one DHS’s Science and Technology Directorate has established at the Maryland Test Facility, agencies can collect immediate user feedback to measure customer acceptance and improve customer experience.
To be able to pivot fast, AI governance boards should develop protocols and policies early on for how to quickly update governance, communicate changes and flex organizational structures and ways of working. Through all of this, lines of communication should be bidirectional, flowing seamlessly from senior leadership to employees and vice versa.
Border agencies need AI governance to maximize the potential of AI in border management. Compliance with the Executive Memorandum is the place to start. But by going beyond compliance, border agencies can do even more, creating a strong and resilient foundation for the border of the future.
Learn more
While there is a pending deadline to comply with the OMB Executive Memo, the AI governance dialogue should be ongoing and progressive in response to the ever-evolving AI landscape. To learn more about AI governance for border agencies, contact Craig McIntire and Shweta Mulcare.
Learn more about SAIC's border innovations